    Constrained Optimization in Random Simulation: Efficient Global Optimization and Karush-Kuhn-Tucker Conditions

    We develop a novel method for solving constrained optimization problems in random (or stochastic) simulation; i.e., our method minimizes the goal output subject to one or more output constraints and input constraints. Our method is novel in that it combines the Karush-Kuhn-Tucker (KKT) conditions with the popular algorithm called "efficient global optimization" (EGO), which is also known as "Bayesian optimization" and is related to "active learning". Originally, EGO solves unconstrained optimization problems in deterministic simulation; EGO is a sequential algorithm that uses Kriging (or Gaussian process) metamodeling of the underlying simulation model, treating the simulation as a black box. Though there are many variants of EGO (for these unconstrained deterministic problems and for variants of these problems), none of them uses the KKT conditions, even though these conditions are well-known (first-order necessary) optimality conditions in white-box problems. Because the simulation is random, we apply stochastic Kriging. Furthermore, we allow for variance heterogeneity and apply a popular sample allocation rule to determine the number of replicated simulation outputs for selected combinations of simulation inputs. Moreover, our algorithm can take advantage of parallel computing. We numerically compare the performance of our algorithm and the popular proprietary OptQuest algorithm in two familiar examples (namely, a mathematical toy example and a practical inventory system with a service-level constraint); we conclude that our algorithm is more efficient (it requires fewer expensive simulation runs) and more effective (it gives better estimates of the true global optimum).
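
    As background for the abstract above: EGO alternates between fitting a Kriging (Gaussian process) metamodel to the outputs observed so far and maximizing an acquisition criterion such as expected improvement (EI) to select the next input to simulate. The minimal Python sketch below shows only this basic unconstrained, deterministic EGO loop; it is not the authors' KKT-based method, and the toy objective simulate, the search interval, and the iteration budget are illustrative assumptions. It relies on scikit-learn's GaussianProcessRegressor and SciPy's normal distribution.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def simulate(x):
            # Toy deterministic "simulation" standing in for an expensive black box.
            return np.sin(3.0 * x) + 0.5 * x**2

        def expected_improvement(x_cand, gp, y_best):
            # EI(x) = E[max(y_best - Y(x), 0)] for minimization, under the GP posterior.
            mu, sigma = gp.predict(x_cand, return_std=True)
            sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
            z = (y_best - mu) / sigma
            return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

        rng = np.random.default_rng(0)
        X = rng.uniform(-2.0, 2.0, size=(5, 1))  # small initial design on [-2, 2]
        y = simulate(X).ravel()

        for _ in range(20):  # sequential EGO iterations
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
            gp.fit(X, y)
            x_cand = np.linspace(-2.0, 2.0, 1000).reshape(-1, 1)
            ei = expected_improvement(x_cand, gp, y.min())
            x_next = x_cand[np.argmax(ei)].reshape(1, 1)  # next input to simulate
            X = np.vstack([X, x_next])
            y = np.append(y, simulate(x_next).ravel())

        print("estimated minimizer:", X[np.argmin(y)], "value:", y.min())

    In the paper's setting, the plain GP fit would be replaced by stochastic Kriging, replications governed by the sample allocation rule would occur at each call of the simulation, and the KKT conditions would enter the selection of the next input.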

    Constrained Optimization in Random Simulation: Efficient Global Optimization and Karush-Kuhn-Tucker Conditions (Revision of CentER DP 2022-022)

    No full text
    We develop a novel method for solving constrained optimization problems in random (or stochastic) simulation. In these problems, the goal output is minimized, subject to one or more output constraints and input constraints. Our method is novel in that it combines the Karush-Kuhn-Tucker (KKT) conditions with the popular algorithm called "efficient global optimization" (EGO); EGO is also known as "Bayesian optimization" and is related to "active learning". Originally, EGO solves unconstrained optimization problems in deterministic simulation; EGO is a sequential algorithm that uses Kriging (or Gaussian process) metamodeling of the underlying simulation model, treating the simulation as a black box. Though there are many variants of EGO, no variant uses the KKT conditions, even though these conditions are well-known (first-order necessary) optimality conditions in white-box mathematical optimization. Because we assume that the simulation is random, we apply stochastic Kriging. Furthermore, we admit variance heterogeneity and apply a popular sample allocation rule to determine the number of replicated simulation outputs for selected combinations of simulation inputs. We numerically compare the performance of our algorithm and three alternative algorithms in four familiar examples. We conclude that our algorithm is often more efficient (it requires fewer expensive simulation runs) and more effective (it gives better estimates of the true global optimum).
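
    For reference, the KKT conditions invoked in these abstracts are the standard first-order necessary optimality conditions of white-box constrained optimization. For a problem of the form min_x f(x) subject to g_i(x) <= 0 (i = 1, ..., m), a textbook statement (in generic notation, not the paper's) is

        \nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) = 0, \qquad
        g_i(x^*) \le 0, \qquad \lambda_i \ge 0, \qquad \lambda_i \, g_i(x^*) = 0 \quad (i = 1, \dots, m),

    where x^* is a candidate optimum and the \lambda_i are the KKT (Lagrange) multipliers.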